Today we continue from yesterday's work, so we first need yesterday's code.
import pandas as pd
# Load the data
melbourne_file_path = './Dataset/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
# Drop rows with missing values
filtered_melbourne_data = melbourne_data.dropna(axis=0)
# Select the target and the features
y = filtered_melbourne_data.Price
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea',
'YearBuilt', 'Lattitude', 'Longtitude']
X = filtered_melbourne_data[melbourne_features]
from sklearn.tree import DecisionTreeRegressor
# Define the model
melbourne_model = DecisionTreeRegressor()
# Fit the model
melbourne_model.fit(X, y)
DecisionTreeRegressor(ccp_alpha=0.0, criterion='mse', max_depth=None,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort='deprecated',
random_state=None, splitter='best')
When the target is numeric, we can take the error of each prediction (the absolute difference between the actual and predicted price), average these errors over all rows, and use the result as a measure of model quality. This is the mean absolute error (MAE).
from sklearn.metrics import mean_absolute_error
predicted_home_prices = melbourne_model.predict(X)
mean_absolute_error(y, predicted_home_prices)
434.71594577146544
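The same value can be reproduced by averaging the absolute residuals directly; a minimal sketch (reusing y and predicted_home_prices from above, with manual_mae as an illustrative variable name) just to make explicit what mean_absolute_error computes:
# MAE by hand: the mean of |actual - predicted| over all rows
manual_mae = (y - predicted_home_prices).abs().mean()
print(manual_mae)  # should match the value returned by mean_absolute_error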
When the data used for validation is the same data the model was trained on, the resulting score is called an in-sample score.
In-sample scores look very good, but they do not reflect how the model performs on data it has never seen.
We therefore need to split the data into a training set and a validation set.
from sklearn.model_selection import train_test_split
# Split the data into training and validation sets; both the features and the target are split
# The split is random; setting random_state ensures the same split every time
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state = 0)
# Define the model
melbourne_model = DecisionTreeRegressor()
# Fit the model
melbourne_model.fit(train_X, train_y)
# Get predictions on the validation data
val_predictions = melbourne_model.predict(val_X)
print(mean_absolute_error(val_y, val_predictions))
266023.7023886378
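Putting the in-sample and out-of-sample scores side by side makes the gap obvious; a small sketch reusing the model fitted on train_X above (train_predictions is just an illustrative variable name):
# In-sample MAE: evaluated on the same data the model was trained on
train_predictions = melbourne_model.predict(train_X)
print(mean_absolute_error(train_y, train_predictions))
# Out-of-sample MAE: evaluated on held-out validation data
print(mean_absolute_error(val_y, val_predictions))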